
    BoXHED2.0: Scalable boosting of dynamic survival analysis

    Modern applications of survival analysis increasingly involve time-dependent covariates. In healthcare settings, such covariates provide dynamic patient histories that can be used to assess health risks in real time by tracking the hazard function. Hazard learning is thus particularly useful in healthcare analytics, and the open-source package BoXHED 1.0 provides the first implementation of a gradient-boosted hazard estimator that is fully nonparametric. This paper introduces BoXHED 2.0, a major advance over BoXHED 1.0 in several ways. Crucially, BoXHED 2.0 can handle survival data that go far beyond right-censoring, and it also supports recurring events. To our knowledge, this is the only nonparametric machine learning implementation able to do so. Another major improvement is that BoXHED 2.0 is orders of magnitude more scalable, due in part to a novel data preprocessing step that sidesteps the need for explicit quadrature when dealing with time-dependent covariates. BoXHED 2.0 supports the use of GPUs and multicore CPUs, and is available from GitHub: www.github.com/BoXHED.
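
    As a rough illustration of the kind of input such a hazard learner consumes (a hedged sketch, not the actual BoXHED 2.0 API), time-dependent covariates are typically supplied in counting-process (start/stop) format, where each row covers an interval over which the covariates are constant; the estimator and prediction calls below are hypothetical placeholders.

```python
# Hypothetical sketch (not the actual BoXHED 2.0 API): time-dependent covariates
# in "counting process" (start/stop) format. Each row covers an interval over
# which the covariates are constant.
import pandas as pd

# One subject, three covariate epochs; `delta` marks whether the event occurred
# at the end of the interval. Recurring events would simply appear as multiple
# rows with delta == 1 for the same ID.
data = pd.DataFrame({
    "ID":         [1,   1,   1],
    "t_start":    [0.0, 2.5, 4.0],
    "t_end":      [2.5, 4.0, 6.0],
    "heart_rate": [72,  88,  95],   # example time-dependent covariate
    "delta":      [0,   0,   1],
})

# A boosted hazard estimator fit on such data returns an estimate of the hazard
# lambda(t, x) at time t given the covariate history observed so far.
# model = SomeBoostedHazardEstimator().fit(data)      # hypothetical call
# risk  = model.predict(t=5.0, x={"heart_rate": 95})  # hypothetical call
```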

    Can smartwatches replace smartphones for posture tracking?

    This paper introduces a smartwatch-based human posture tracking platform that identifies whether the wearer is sitting, standing, or lying down. The work develops the system as a proof-of-concept study to investigate a smartwatch's suitability for future remote health monitoring systems and applications. It validates the smartwatch's ability to accurately track user posture in a laboratory setting while reducing the sampling rate to potentially improve battery life, a first step toward verifying that such a system would work in clinical settings. The algorithm classifies the transitions between the three posture states of sitting, standing, and lying down by identifying these transition movements, as well as other movements that might be mistaken for them. The system was trained and developed on a Samsung Galaxy Gear smartwatch, and the algorithm was validated through leave-one-subject-out cross-validation with 20 subjects. The system identifies the relevant transitions at a sampling rate of only 10 Hz with an F-score of 0.930, indicating its ability to effectively replace smartphones for posture tracking, if needed.
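
    A minimal sketch of the evaluation protocol described above: leave-one-subject-out cross-validation over windowed accelerometer features. The synthetic features and the classifier choice are illustrative assumptions, not the paper's exact pipeline.

```python
# Leave-one-subject-out cross-validation over windowed accelerometer features.
# Feature extraction and classifier are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_windows, n_features = 600, 12                  # e.g., mean/std/energy per axis at 10 Hz
X = rng.normal(size=(n_windows, n_features))     # stand-in for real smartwatch features
y = rng.integers(0, 3, size=n_windows)           # 0 = sitting, 1 = standing, 2 = lying down
subjects = rng.integers(0, 20, size=n_windows)   # 20 subjects, as in the study

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"mean F-score across held-out subjects: {np.mean(scores):.3f}")
```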

    Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis

    Advances in self-supervised learning (SSL) have shown that self-supervised pretraining on medical imaging data can provide a strong initialization for downstream supervised classification and segmentation. Given the difficulty of obtaining expert labels for medical image recognition tasks, such an "in-domain" SSL initialization is often desirable due to its improved label efficiency over standard transfer learning. However, most efforts toward SSL of medical imaging data are not adapted to video-based medical imaging modalities. Motivated by this gap, we developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos with the goal of learning strong representations for efficient fine-tuning on downstream cardiac disease diagnosis. EchoCLR leverages (i) distinct videos of the same patient as positive pairs for contrastive learning and (ii) a frame re-ordering pretext task to enforce temporal coherence. When fine-tuned on small portions of labeled data (as few as 51 exams), EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS) over other transfer learning and SSL approaches across internal and external test sets. For example, when fine-tuning on 10% of available training data (519 studies), an EchoCLR-pretrained model achieved 0.72 AUROC (95% CI: [0.69, 0.75]) on LVH classification, compared to 0.61 AUROC (95% CI: [0.57, 0.64]) with a standard transfer learning approach. Similarly, using 1% of available training data (53 studies), EchoCLR pretraining achieved 0.82 AUROC (95% CI: [0.79, 0.84]) on severe AS classification, compared to 0.61 AUROC (95% CI: [0.58, 0.65]) with transfer learning. EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
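
    A hedged sketch of the core contrastive idea: embeddings of two distinct videos from the same patient are pulled together as a positive pair while other patients in the batch act as negatives. This is a generic InfoNCE-style loss, not the released EchoCLR code; the video encoder and the frame re-ordering head are omitted.

```python
# Generic patient-level contrastive (InfoNCE-style) loss; illustrative only.
import torch
import torch.nn.functional as F

def patient_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1):
    """z_a[i] and z_b[i] are embeddings of two different videos of patient i."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                    # (B, B) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # diagonal = same patient
    # Symmetric cross-entropy: each video should match its same-patient partner.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Example with random embeddings standing in for video-encoder outputs:
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
print(patient_contrastive_loss(z_a, z_b))
```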

    DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness

    In wearable sensing applications, data are inevitably irregularly sampled or partially missing, which poses challenges for any downstream application. A unique aspect of wearable data is that they are time series in which channels can be correlated with one another, such as the x, y, and z axes of an accelerometer. We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors. We propose a model, termed DynImp, that handles missingness at different time points using nearest neighbors along the feature axis and then feeds the data into an LSTM-based denoising autoencoder that reconstructs the missing values along the time axis. We evaluate the model in an extreme-missingness scenario (>50% missing rate), which has not been widely tested on wearable data. Our experiments on activity recognition show that the method can exploit multi-modal features from related sensors and also learn from historical time-series dynamics to reconstruct the data under extreme missingness.
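
    A rough sketch of the two-stage idea described above (not the authors' implementation): first fill missing entries using nearest neighbors across correlated channels, then refine along the time axis with an LSTM-based denoising autoencoder. The KNN imputer, model sizes, and synthetic window are assumptions for illustration.

```python
# Two-stage imputation sketch: channel-wise KNN fill, then LSTM denoising autoencoder.
import torch
import torch.nn as nn
from sklearn.impute import KNNImputer

class LSTMDenoisingAE(nn.Module):
    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_channels, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)

    def forward(self, x):                      # x: (batch, time, channels)
        h, _ = self.encoder(x)
        h, _ = self.decoder(h)
        return self.head(h)                    # reconstructed sequence

# Stage 1: KNN imputation across channels (e.g., correlated x/y/z accelerometer axes).
seq = torch.randn(1, 100, 3)                   # one 100-step window, 3 channels
seq[0, 40:60, 1] = float("nan")                # simulate a missing stretch in one channel
filled = torch.tensor(
    KNNImputer(n_neighbors=3).fit_transform(seq[0].numpy()), dtype=torch.float32
).unsqueeze(0)

# Stage 2: the autoencoder, trained to reconstruct clean windows from corrupted ones,
# smooths the crude fill using temporal dynamics.
model = LSTMDenoisingAE(n_channels=3)
reconstruction = model(filled)
```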